
    What makes for effective detection proposals?

    Current top performing object detectors employ detection proposals to guide the search for objects, thereby avoiding exhaustive sliding window search across images. Despite the popularity and widespread use of detection proposals, it is unclear which trade-offs are made when using them during object detection. We provide an in-depth analysis of twelve proposal methods along with four baselines regarding proposal repeatability, ground truth annotation recall on PASCAL, ImageNet, and MS COCO, and their impact on DPM, R-CNN, and Fast R-CNN detection performance. Our analysis shows that for object detection improving proposal localisation accuracy is as important as improving recall. We introduce a novel metric, the average recall (AR), which rewards both high recall and good localisation and correlates surprisingly well with detection performance. Our findings show common strengths and weaknesses of existing methods, and provide insights and metrics for selecting and tuning proposal methods.
    Comment: TPAMI final version, duplicate proposals removed in experiment
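    The average recall (AR) metric described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's exact protocol: the `(x1, y1, x2, y2)` box format and the 0.5–0.95 IoU threshold grid are assumptions.

    ```python
    import numpy as np

    def iou(box_a, box_b):
        # boxes as (x1, y1, x2, y2); intersection-over-union of two boxes
        ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter) if inter > 0 else 0.0

    def average_recall(gt_boxes, proposals, thresholds=np.arange(0.5, 1.0, 0.05)):
        # best-matching proposal IoU for each ground-truth box
        best = [max(iou(g, p) for p in proposals) for g in gt_boxes]
        # recall at each IoU threshold, averaged over thresholds:
        # rewards both high recall and good localisation
        recalls = [np.mean([b >= t for b in best]) for t in thresholds]
        return float(np.mean(recalls))
    ```

    A proposal set that covers every object only loosely scores lower than one with fewer but tightly localised boxes, which is the trade-off the metric is designed to expose.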

    Unsupervised Learning of Edges

    Data-driven approaches for edge detection have proven effective and achieve top results on modern benchmarks. However, all current data-driven edge detectors require manual supervision for training in the form of hand-labeled region segments or object boundaries. Specifically, human annotators mark semantically meaningful edges which are subsequently used for training. Is this form of strong, high-level supervision actually necessary to learn to accurately detect edges? In this work we present a simple yet effective approach for training edge detectors without human supervision. To this end we utilize motion; more specifically, the only input to our method is noisy semi-dense matches between frames. We begin with only a rudimentary knowledge of edges (in the form of image gradients), and alternate between improving motion estimation and improving edge detection. Using a large corpus of video data, we show that edge detectors trained using our unsupervised scheme approach the performance of the same methods trained with full supervision (within 3-5%). Finally, we show that when using a deep network for the edge detector, our approach provides a novel pre-training scheme for object detection.
    Comment: Camera ready version for CVPR 201
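    The alternation the abstract describes (bootstrap from image gradients, then loop between motion estimation and edge retraining) can be sketched schematically. Here `estimate_motion` and `train_detector` are hypothetical placeholders standing in for the paper's components; only the gradient bootstrap and the control flow are concrete.

    ```python
    import numpy as np

    def gradient_edges(frame):
        # rudimentary initial edge map: normalised gradient magnitude
        gy, gx = np.gradient(frame.astype(float))
        mag = np.hypot(gx, gy)
        return mag / (mag.max() + 1e-8)

    def alternate_training(frames, estimate_motion, train_detector, n_rounds=3):
        # bootstrap edge maps from image gradients alone
        edge_maps = [gradient_edges(f) for f in frames]
        detector = gradient_edges
        for _ in range(n_rounds):
            # 1) motion estimation guided by the current edge maps
            matches = estimate_motion(frames, edge_maps)
            # 2) retrain the edge detector on motion-derived labels
            detector = train_detector(frames, matches)
            # 3) re-estimate edges with the improved detector
            edge_maps = [detector(f) for f in frames]
        return detector
    ```

    The key design point is that neither stage ever sees human labels: each round's edge maps come only from the previous round's detector and the noisy frame-to-frame matches.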

    Daily Timed Sexual Interaction Induces Moderate Anticipatory Activity in Mice

    Anticipation of resource availability is a vital skill, yet it is poorly understood in terms of neuronal circuitry. Rodents display robust anticipatory activity in the several hours preceding timed daily access to food when that access is limited to a short duration. We tested whether this anticipatory behavior generalizes to timed daily social interaction by examining whether singly housed male mice could anticipate either a daily novel female or a familiar female. We observed that anticipatory activity was moderate under both conditions, although a novel female partner and prior sexual experience each modestly increased it. In contrast, restricted access to running wheels did not produce any anticipatory activity, suggesting that an increase in activity during the scheduled access time was not sufficient to induce anticipation. To tease apart social versus sexual interaction, we tested the effect of exposing singly housed female mice to a familiar companion female mouse daily. The female mice did not show anticipatory activity for restricted female access, despite a large amount of social interaction, suggesting that daily timed social interaction between mice of the same sex is insufficient to induce anticipatory activity. Our study demonstrates that male mice will show anticipatory activity, albeit inconsistently, for a daily timed sexual encounter.

    Learning to Segment Every Thing

    Most methods for object instance segmentation require all training examples to be labeled with segmentation masks. This requirement makes it expensive to annotate new categories and has restricted instance segmentation models to ~100 well-annotated classes. The goal of this paper is to propose a new partially supervised training paradigm, together with a novel weight transfer function, that enables training instance segmentation models on a large set of categories all of which have box annotations, but only a small fraction of which have mask annotations. These contributions allow us to train Mask R-CNN to detect and segment 3000 visual concepts using box annotations from the Visual Genome dataset and mask annotations from the 80 classes in the COCO dataset. We evaluate our approach in a controlled study on the COCO dataset. This work is a first step towards instance segmentation models that have broad comprehension of the visual world.
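    The weight transfer idea can be sketched as a small learned function that predicts a class's mask-head weights from its box-head weights, so classes with only box annotations still get mask weights. The dimensions, the two-layer MLP form, and all names below are illustrative assumptions; in the paper this function is trained jointly with Mask R-CNN on the classes that do have masks.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # hypothetical dimensions: per-class box-head weights (d_box)
    # mapped to per-class mask-head weights (d_mask)
    d_box, d_mask, hidden = 1024, 256, 512

    # parameters of the transfer function tau(w_box; theta),
    # here a small two-layer MLP with random (untrained) weights
    theta1 = rng.normal(0.0, 0.01, (d_box, hidden))
    theta2 = rng.normal(0.0, 0.01, (hidden, d_mask))

    def transfer(w_box):
        """Predict a class's mask-head weights from its box-head weights."""
        h = np.maximum(0.0, w_box @ theta1)   # ReLU hidden layer
        return h @ theta2

    # a class seen only with box annotations still receives mask weights:
    w_box_novel = rng.normal(0.0, 0.01, d_box)
    w_mask_novel = transfer(w_box_novel)
    ```

    Because the mapping is shared across classes, supervision from the ~80 mask-labeled COCO classes is what makes the predicted mask weights meaningful for the thousands of box-only Visual Genome categories.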